Matching Problems to Solutions: An Explainable Way of Solving Machine Learning Problems

Saleh, Lokman, Mili, Hafedh, Boukadoum, Mounir

arXiv.org Artificial Intelligence

Domain experts from all fields are called upon, working with data scientists, to explore the use of ML techniques to solve their problems. Starting from a domain problem/question, ML-based problem solving typically involves three steps: (1) formulating the business problem (problem domain) as a data analysis problem (solution domain), (2) sketching a high-level ML-based solution pattern, given the domain requirements and the properties of the available data, and (3) designing and refining the different components of the solution pattern. There has to be a substantial body of ML problem solving knowledge that ML researchers agree on, and that ML practitioners routinely apply to solve the most common problems. Our work deals with capturing this body of knowledge, and embodying it in an ML problem solving workbench that helps domain specialists who are not ML experts to explore the ML solution space. This paper focuses on: 1) the representation of domain problems, ML problems, and the main ML solution artefacts, and 2) a heuristic matching function that helps identify the ML algorithm family that is most appropriate for the domain problem at hand, given the domain (expert) requirements and the characteristics of the training data. We review related work and outline our strategy for validating the workbench.
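One way to picture such a heuristic matching function is as a small rule base that scores algorithm families against declared requirements and data characteristics. The sketch below is purely illustrative; the rule contents, family names, and dictionary keys are assumptions, not the authors' actual knowledge base.

```python
# Illustrative sketch of a heuristic matching function: score candidate ML
# algorithm families against domain requirements and data characteristics.
# All rules and names below are invented for illustration.

def match_algorithm_family(requirements, data):
    """Return algorithm families ranked by a heuristic suitability score."""
    scores = {"decision trees": 0, "linear models": 0,
              "neural networks": 0, "clustering": 0}

    # Labelled data suggests supervised families; unlabelled suggests clustering.
    if data.get("labelled"):
        for family in ("decision trees", "linear models", "neural networks"):
            scores[family] += 1
    else:
        scores["clustering"] += 2

    # An explainability requirement favours interpretable families.
    if requirements.get("explainable"):
        scores["decision trees"] += 2
        scores["linear models"] += 1
        scores["neural networks"] -= 2

    # Large sample sizes make high-capacity models viable.
    if data.get("n_samples", 0) > 100_000:
        scores["neural networks"] += 2

    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = match_algorithm_family({"explainable": True},
                                {"labelled": True, "n_samples": 5000})
print(ranked[0][0])  # highest-scoring family for this problem profile
```

With an explainability requirement and a modest labelled dataset, interpretable families rise to the top of the ranking, which is the kind of guidance a non-expert would get from the workbench.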


SLIMBRAIN: Augmented Reality Real-Time Acquisition and Processing System For Hyperspectral Classification Mapping with Depth Information for In-Vivo Surgical Procedures

Sancho, Jaime, Villa, Manuel, Chavarrías, Miguel, Juarez, Eduardo, Lagares, Alfonso, Sanz, César

arXiv.org Artificial Intelligence

Over the last two decades, augmented reality (AR) has led to the rapid development of new interfaces in various fields of social and technological application domains. One such domain is medicine, and surgery in particular, where these visualization techniques help to improve the effectiveness of preoperative and intraoperative procedures. Following this trend, this paper presents SLIMBRAIN, a real-time acquisition and processing AR system suitable to classify and display brain tumor tissue from hyperspectral (HS) information. This system captures and processes HS images at 14 frames per second (FPS) during the course of a tumor resection operation to detect and delimit cancer tissue while the neurosurgeon operates. The result is represented in an AR visualization where the classification results are overlaid on the RGB point cloud captured by a LiDAR camera. This representation allows natural navigation of the scene as it is captured and processed, improving the visualization and hence the effectiveness of the HS technology to delimit tumors. The whole system has been verified in real brain tumor resection operations.
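The final display step described above, overlaying per-point classification results on LiDAR point-cloud colours, can be sketched as a simple alpha blend. This is a minimal illustration, not the SLIMBRAIN implementation; the function name, the 0..1 colour convention, and the use of `None` for unclassified points are all assumptions.

```python
# Minimal sketch of blending tumour-classification colours into the RGB
# colours of a LiDAR point cloud. Illustrative only; not the paper's code.

def overlay_classification(rgb_points, class_colors, alpha=0.5):
    """Alpha-blend a classification colour map over point-cloud colours.

    rgb_points   -- list of (r, g, b) tuples from the LiDAR camera, in 0..1
    class_colors -- parallel list of (r, g, b) tuples, or None where the
                    hyperspectral classifier gave no label for that point
    """
    blended = []
    for rgb, cls in zip(rgb_points, class_colors):
        if cls is None:                      # unclassified: keep camera colour
            blended.append(rgb)
        else:                                # classified: blend in label colour
            blended.append(tuple(a * (1 - alpha) + c * alpha
                                 for a, c in zip(rgb, cls)))
    return blended

points = [(0.2, 0.2, 0.2), (0.8, 0.8, 0.8)]
labels = [None, (1.0, 0.0, 0.0)]            # second point flagged as tumour
print(overlay_classification(points, labels))
```

Classified points shift toward the label colour while unclassified points keep their camera colour, so the surgeon still sees the underlying scene through the overlay.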


Pentagon Calls for New Ideas in 'Third Wave' of AI Evolution

#artificialintelligence

A key research and development agency within the Department of Defense is accepting new contract proposals specifically focused on advancing algorithmic processing within Defense's artificial intelligence projects. The Defense Advanced Research Projects Agency is formally soliciting contracts for its new Enabling Confidence program, a subsect within its Artificial Intelligence Exploration initiative. The AIE focuses on what DARPA defines as its "third wave" of artificial intelligence research, which includes AI theory and application research that examines limitations with rule and statistical learning theories belying AI technologies. "The pace of discovery in AI science and technology is accelerating worldwide," the program announcement says. "AIE will enable DARPA to fund pioneering AI research to discover new areas where R&D programs awarded through this new approach may be able to advance the state of the art."


Radar Image Reconstruction from Raw ADC Data using Parametric Variational Autoencoder with Domain Adaptation

Stephan, Michael, Stadelmayer, Thomas, Santra, Avik, Fischer, Georg, Weigel, Robert, Lurz, Fabian

arXiv.org Artificial Intelligence

This paper presents a parametric variational autoencoder-based human target detection and localization framework working directly with the raw analog-to-digital converter data from the frequency modulated continuous wave radar. We propose a parametrically constrained variational autoencoder, with residual and skip connections, capable of generating the clustered and localized target detections on the range-angle image. Furthermore, to circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies whereby we first train the neural network using ray tracing based model data and then adapt the network to work on real sensor data. This strategy ensures better generalization and scalability of the proposed neural network even though it is trained with limited radar data. We demonstrate the superior detection and localization performance of our proposed solution compared to the conventional signal processing pipeline and earlier state-of-the-art deep U-Net architecture with range-Doppler images as inputs.
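The two-stage schedule described above, pre-training on plentiful simulated data and then fine-tuning on scarce real measurements, can be shown with a toy model. A one-parameter linear regressor stands in for the paper's variational autoencoder; the data, learning rates, and epoch counts are invented for illustration.

```python
import random

# Toy sketch of the domain-adaptation schedule: pre-train on abundant
# synthetic (ray-tracing-style) data, then fine-tune briefly, at a lower
# learning rate, on a handful of real measurements. Illustrative only.

def sgd_fit(w, data, lr, epochs):
    """Fit y = w * x by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

random.seed(0)

# Stage 1: abundant synthetic data generated from a model with slope 2.0.
synthetic = [(x, 2.0 * x) for x in [random.uniform(-1, 1) for _ in range(500)]]
w = sgd_fit(0.0, synthetic, lr=0.1, epochs=5)

# Stage 2: ten "real" measurements with a shifted slope (2.3), adapted at
# a lower learning rate so stage-1 knowledge is not immediately overwritten.
real = [(x, 2.3 * x) for x in [random.uniform(-1, 1) for _ in range(10)]]
w = sgd_fit(w, real, lr=0.05, epochs=20)
print(w)
```

The adapted parameter ends up close to the real-data value even though only ten real samples were used, which is the generalization benefit the abstract claims for the full network.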


Flexible Approach for Computer-Assisted Reading and Analysis of Texts

Biskri, Ismaïl (Université du Québec à Trois-Rivières) | Hassani, Mohamed (Université du Québec à Trois-Rivières)

AAAI Conferences

A Computer-Assisted Reading and Analysis of Texts (CARAT) process is a complex technology that connects language, text, information and knowledge theories with computational formalizations, statistical approaches, symbolic approaches, standard and non-standard logics, etc. This process should always remain under the control of the user, according to their subjectivity, their knowledge and the purpose of their analysis. It becomes important to design platforms that support the design of CARAT tools, their management, their adaptation to new needs, and experimentation. Even though several platforms for mining data, including textual data, have emerged in recent years, they lack flexibility and sound formal foundations. We propose, in this paper, a formal model with strong logical foundations, based on typed applicative systems.
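The core idea of a typed applicative system, an operator applies to an operand only when their categorial types match, can be illustrated in a few lines. The concrete types, lexicon, and function names below are invented for illustration and are not the paper's formal model.

```python
# Tiny sketch of typed application: a functor of type (A -> B) applies to an
# argument of type A and yields a result of type B; anything else is rejected.
# Types and lexicon are illustrative assumptions.

def apply_cat(functor, argument):
    """Apply a functor of type (arg_type -> result_type) to a typed argument."""
    (arg_type, result_type), f = functor
    value_type, value = argument
    if value_type != arg_type:
        raise TypeError(f"expected {arg_type}, got {value_type}")
    return (result_type, f(value))

# Lexicon: a noun phrase and an intransitive verb typed NP -> S.
np = ("NP", "the analyst")
verb = (("NP", "S"), lambda subj: f"{subj} reads")

sentence = apply_cat(verb, np)
print(sentence)  # a well-typed application yields a sentence-typed result
```

Ill-typed combinations raise an error instead of producing an analysis, which is how such a system keeps the user's text analysis on sound formal foundations.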


Network Effects: In 2019 IoT And 5G Will Push AI To The Very Edge

#artificialintelligence

Almost thirty years ago, when the internet was launched onto an unsuspecting world, even inventor Tim Berners-Lee and colleagues at CERN could not have predicted the upheaval that would follow. It has been the greatest technology revolution since the original industrial one. The combination of Cloud, IoT and AI is driving opportunity and threat in equal measure. Decisions made within organizations will have an impact for years to come. The way in which IoT-Edge links to the broader cloud backend, and the way in which AI integrates across the full processing chain, will be the key to unlocking material innovation and value. After many years of rationalization and stretched infrastructure investments, the IoT represents a tipping point for telcos, the cellular networks on whose backbones the new IoT offerings will be delivered.


Artificial Intelligence: A Core Element of the Nuxeo Vision

#artificialintelligence

Like many in my generation, I grew up watching the Jetsons, and the idea of a maid robot (like Rosie) was appealing for obvious reasons. I now have a robot that can vacuum my apartment and a machine that washes my dishes. While I don't have Rosie doing these manual tasks for me, technology is indeed automating mundane tasks in my home... The vision that science fiction and Hollywood sells to us as it relates to Artificial Intelligence (AI) is that of a fully functional humanoid robot, or a computer with human intelligence. They might want to kill you (the Terminator, or H.A.L. from 2001: A Space Odyssey) or they might be here to help you (Data from Star Trek: The Next Generation), but they are always highly cognitive machines, sometimes with human-like personalities, emotions included (sorry Data!).


Agreement Asymmetries in Arabic from a Categorical Perspective

Biskri, Ismaïl (Université du Québec à Trois-Rivières) | Jebali, Adel (Concordia University)

AAAI Conferences

Agreement asymmetries are among the most debated issues in Arabic linguistics. Even though the facts suggest a unified treatment based on the properties of agreement, most researchers in this field do not take into account the essential difference between grammatical agreement and anaphoric agreement. We propose such a distinction to explain these asymmetries, and we propose an analysis that we implement in the ACCG framework.